Algorithmic Fairness



Algorithmic Fairness in AI Surrogates for End-of-Life Decision-Making

Ahmad, Muhammad Aurangzeb

arXiv.org Artificial Intelligence

Artificial intelligence surrogates are systems designed to infer a patient's preferences when that person loses decision-making capacity. Fairness in such systems remains insufficiently explored, and traditional algorithmic fairness frameworks fall short in contexts where decisions are relational, existential, and culturally diverse. This paper develops an ethical framework for algorithmic fairness in AI surrogates by mapping major fairness notions onto potential real-world end-of-life scenarios, and then examines fairness across moral traditions. The authors argue that fairness in this domain extends beyond parity of outcomes to encompass moral representation: fidelity to the patient's values, relationships, and worldview.




A Feminist Account of Intersectional Algorithmic Fairness

Mirsch, Marie, Wegner, Laila, Strube, Jonas, Leicht-Scholten, Carmen

arXiv.org Artificial Intelligence

Intersectionality has profoundly influenced research and political action by revealing how interconnected systems of privilege and oppression influence lived experiences, yet its integration into algorithmic fairness research remains limited. Existing approaches often rely on single-axis or formal subgroup frameworks that risk oversimplifying social realities and neglecting structural inequalities. We propose Substantive Intersectional Algorithmic Fairness, extending Green's (2022) notion of substantive algorithmic fairness with insights from intersectional feminist theory. Building on this foundation, we introduce ten desiderata within the ROOF methodology to guide the design, assessment, and deployment of algorithmic systems in ways that address systemic inequities while mitigating harms to intersectionally marginalized communities. Rather than prescribing fixed operationalizations, these desiderata encourage reflection on assumptions of neutrality, the use of protected attributes, the inclusion of multiply marginalized groups, and enhancing algorithmic systems' potential. Our approach emphasizes that fairness cannot be separated from social context, and that in some cases, principled non-deployment may be necessary. By bridging computational and social science perspectives, we provide actionable guidance for more equitable, inclusive, and context-sensitive intersectional algorithmic practices.



Competing Risks: Impact on Risk Estimation and Algorithmic Fairness

Jeanselme, Vincent, Tom, Brian, Barrett, Jessica

arXiv.org Artificial Intelligence

Accurate time-to-event prediction is integral to decision-making, informing medical guidelines, hiring decisions, and resource allocation. Survival analysis, the quantitative framework used to model time-to-event data, accounts for patients who do not experience the event of interest during the study period, known as censored patients. However, many patients experience events that prevent the observation of the outcome of interest. These competing risks are often treated as censoring, a practice frequently overlooked due to a limited understanding of its consequences. Our work theoretically demonstrates why treating competing risks as censoring introduces substantial bias in survival estimates, leading to systematic overestimation of risk and, critically, amplifying disparities. First, we formalize the problem of misclassifying competing risks as censoring and quantify the resulting error in survival estimates. Specifically, we develop a framework to estimate this error and demonstrate the associated implications for predictive performance and algorithmic fairness. Furthermore, we examine how differing risk profiles across demographic groups lead to group-specific errors, potentially exacerbating existing disparities. Our findings, supported by an empirical analysis of cardiovascular management, demonstrate that ignoring competing risks disproportionately impacts the individuals most at risk of these events, potentially accentuating inequity. By quantifying the error and highlighting the fairness implications of the common practice of considering competing risks as censoring, our work provides a critical insight into the development of survival models: practitioners must account for competing risks to improve accuracy, reduce disparities in risk assessment, and better inform downstream decisions.
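
To make the bias concrete, here is a minimal numeric sketch (with illustrative, assumed hazard rates, not values from the paper) under two competing exponential hazards: the correct cause-specific cumulative incidence is compared with the naive estimate obtained by treating the competing event as censoring. The naive curve systematically overestimates risk, and the gap widens as the competing-event rate grows, which is the mechanism behind group-specific errors when that rate differs across demographic groups.

```python
import numpy as np

# Illustrative cause-specific hazards (assumed, not taken from the paper):
# lam1 for the event of interest, lam2 for the competing event.
lam1, lam2 = 0.05, 0.10            # events per person-year
t = np.linspace(0, 10, 6)          # evaluation horizon in years

# Correct cumulative incidence of the event of interest under competing risks:
# F1(t) = lam1 / (lam1 + lam2) * (1 - exp(-(lam1 + lam2) * t))
cif_correct = lam1 / (lam1 + lam2) * (1 - np.exp(-(lam1 + lam2) * t))

# Naive "1 - Kaplan-Meier" estimate when competing events are treated as
# censoring: converges to 1 - exp(-lam1 * t), which ignores that competing
# events remove individuals from ever experiencing the event of interest.
cif_naive = 1 - np.exp(-lam1 * t)

for ti, c, n in zip(t, cif_correct, cif_naive):
    print(f"t={ti:4.1f}y  correct={c:.3f}  naive={n:.3f}  overestimation={n - c:+.3f}")
```

Raising lam2 for one group while holding lam1 fixed widens the overestimation only for that group, which is how the misclassification of competing risks can amplify existing disparities.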


Algorithmic Fairness: A Runtime Perspective

Cano, Filip, Henzinger, Thomas A., Kueffner, Konstantin

arXiv.org Artificial Intelligence

Fairness in AI is traditionally studied as a static property evaluated once, over a fixed dataset. However, real-world AI systems operate sequentially, with outcomes and environments evolving over time. This paper proposes a framework for analysing fairness as a runtime property. Using a minimal yet expressive model based on sequences of coin tosses with possibly evolving biases, we study the problems of monitoring and enforcing fairness expressed in either toss outcomes or coin biases. Since there is no one-size-fits-all solution for either problem, we provide a summary of monitoring and enforcement strategies, parametrised by environment dynamics, prediction horizon, and confidence thresholds. For both problems, we present general results under simple or minimal assumptions. We survey existing solutions for the monitoring problem for Markovian and additive dynamics, and existing solutions for the enforcement problem in static settings with known dynamics.
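
As a rough illustration of the monitoring problem, the sketch below is hypothetical and assumes a single coin with a static bias (not the evolving-bias dynamics the paper also covers): it maintains a Hoeffding confidence interval for the bias after each toss and flags the run once the interval lies entirely outside a fairness band such as [0.4, 0.6].

```python
import math
import random

class FairnessMonitor:
    """Runtime monitor for the bias of a single coin (illustrative sketch).

    Maintains a Hoeffding confidence interval for the bias after every toss
    and reports a violation once the interval lies entirely outside the
    fairness band [low, high] at confidence 1 - delta. Assumes a static bias;
    evolving biases need the dynamics-specific monitors surveyed in the paper.
    """

    def __init__(self, low=0.4, high=0.6, delta=0.05):
        self.low, self.high, self.delta = low, high, delta
        self.n = 0       # tosses observed so far
        self.heads = 0   # positive outcomes observed so far

    def observe(self, outcome: int) -> bool:
        """Feed one toss (0 or 1); return True if fairness is certainly violated."""
        self.n += 1
        self.heads += outcome
        p_hat = self.heads / self.n
        # Hoeffding radius eps solving 2 * exp(-2 * n * eps^2) = delta
        eps = math.sqrt(math.log(2 / self.delta) / (2 * self.n))
        self.interval = (p_hat - eps, p_hat + eps)
        return self.interval[0] > self.high or self.interval[1] < self.low

random.seed(0)
monitor = FairnessMonitor()
for step in range(1, 2001):
    violated = monitor.observe(1 if random.random() < 0.8 else 0)  # true bias 0.8
    if violated:
        print(f"violation detected at toss {step}, bias interval {monitor.interval}")
        break
```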


The Illusion of Fairness: Auditing Fairness Interventions with Audit Studies

Sariola, Disa, Button, Patrick, Culotta, Aron, Mattei, Nicholas

arXiv.org Artificial Intelligence

Artificial intelligence systems, especially those using machine learning, are being deployed in domains from hiring to loan issuance in order to automate these complex decisions. Judging both the effectiveness and fairness of these AI systems, and of their human decision-making counterparts, is a complex and important topic studied across both the computational and social sciences. Within machine learning, a common way to address bias in downstream classifiers is to resample the training data to offset disparities. For example, if hiring rates vary by some protected class, then one may equalize the rate within the training set to alleviate bias in the resulting classifier. While simple and seemingly effective, these methods have typically only been evaluated using data obtained through convenience samples, introducing selection bias and label bias into metrics. Within the social sciences, psychology, public health, and medicine, audit studies, in which fictitious "testers" (e.g., resumes, emails, patient actors) are sent to subjects (e.g., job openings, businesses, doctors) in randomized controlled trials, provide high-quality data that support rigorous estimates of discrimination. In this paper, we investigate how data from audit studies can be used to improve our ability to both train and evaluate automated hiring algorithms. We find that such data reveal cases where the common fairness intervention method of equalizing base rates across classes appears to achieve parity using traditional measures, but in fact has roughly 10% disparity when measured appropriately. We additionally introduce interventions based on individual treatment effect estimation methods that further reduce algorithmic discrimination using this data.
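
For readers unfamiliar with the intervention being audited, the following is a hypothetical sketch (the toy data and function name are assumptions, not the authors' code) of equalizing base rates by resampling: positives in the lower-rate group are oversampled until each group's positive rate matches the highest one.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Toy training data: a protected attribute and a hiring label whose base
# rate differs by group (30% vs. 15%). Purely illustrative.
df = pd.DataFrame({"group": rng.choice(["A", "B"], size=1000)})
df["hired"] = np.where(df["group"] == "A",
                       rng.binomial(1, 0.30, len(df)),
                       rng.binomial(1, 0.15, len(df)))

def equalize_base_rates(df, group_col="group", label_col="hired", seed=0):
    """Oversample positives in lower-rate groups until every group's positive
    rate matches the highest observed rate (the intervention audited above)."""
    target = df.groupby(group_col)[label_col].mean().max()
    parts = []
    for _, sub in df.groupby(group_col):
        pos, neg = sub[sub[label_col] == 1], sub[sub[label_col] == 0]
        # Number of positives needed so that (|pos| + need) / (|pos| + need + |neg|) == target
        need = int(round(target * len(neg) / (1 - target))) - len(pos)
        if need > 0:
            sub = pd.concat([sub, pos.sample(n=need, replace=True, random_state=seed)])
        parts.append(sub)
    return pd.concat(parts, ignore_index=True)

balanced = equalize_base_rates(df)
print(df.groupby("group")["hired"].mean())        # unequal base rates
print(balanced.groupby("group")["hired"].mean())  # approximately equal rates
```

The paper's point is that parity obtained this way on conveniently sampled data can still conceal roughly 10% disparity once evaluated against audit-study data.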


OxonFair: A Flexible Toolkit for Algorithmic Fairness

Neural Information Processing Systems

Compared to existing toolkits, OxonFair supports NLP and computer vision classification as well as standard tabular problems. Its formulation is easily extensible and much more expressive than existing toolkits: it supports all 9 and all 10 of the decision-based group metrics of two popular review articles. Because a performance objective is optimized jointly with the fairness constraint, degradation is minimized while enforcing fairness, and the performance of inadequately tuned unfair baselines can even improve.
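
Although the entry above describes the OxonFair toolkit itself, the underlying idea of jointly optimizing a performance objective with a decision-based group metric can be illustrated by a generic per-group threshold search on held-out data; the code below is a hypothetical sketch and does not use OxonFair's actual API.

```python
import numpy as np

def fair_group_thresholds(scores, labels, groups, metric_slack=0.02):
    """Grid-search per-group decision thresholds so that the two groups'
    selection (positive-prediction) rates differ by at most `metric_slack`,
    while maximizing overall accuracy. Generic sketch, not OxonFair's API."""
    best, grid = None, np.linspace(0.05, 0.95, 19)
    for t0 in grid:
        for t1 in grid:
            preds = (scores >= np.where(groups == 0, t0, t1)).astype(int)
            gap = abs(preds[groups == 0].mean() - preds[groups == 1].mean())
            if gap > metric_slack:
                continue  # fairness constraint not met
            acc = (preds == labels).mean()
            if best is None or acc > best[0]:
                best = (acc, t0, t1)
    return best  # (accuracy, threshold for group 0, threshold for group 1)

# Synthetic validation data: group 1 receives systematically lower scores.
rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 5000)
labels = rng.binomial(1, 0.4, 5000)
scores = np.clip(0.5 * labels + rng.normal(0.3, 0.2, 5000) - 0.1 * groups, 0, 1)
print(fair_group_thresholds(scores, labels, groups))
```

Any metric expressible in terms of per-group confusion-matrix counts (the "decision-based group metrics" mentioned above) could replace the selection-rate gap in the constraint.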